ℓ1-regularized ensemble learning
Abstract
Methods that use an ℓ1 norm to encourage model sparsity are now widely applied across many disciplines. However, aggregating such sparse models across fits to resampled data remains an open problem. Because resampling has proven highly effective at reducing model variance and improving variable selection, a method that can generate a single sparse solution from multiple fits to resampled data is desirable. We present a method, related to collaborative filtering and consensus optimization, that finds a sparse "global" consensus solution from many "local" models fit to subsampled data using constrained convex optimization. Called "consensus selection", this ensemble method preserves sparsity while decreasing both model variance and model bias relative to other state-of-the-art variable selection and fitting procedures for ℓ1-regularized models.
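The idea described above can be sketched in a few lines: fit "local" ℓ1-regularized (lasso) models on subsamples, then form a sparse "global" consensus by soft-thresholding the average of the local coefficients, as in the z-update of consensus ADMM. This is a minimal illustration of the consensus idea, not the authors' exact optimization problem; the coordinate-descent lasso solver, the subsampling fraction `frac`, and the consensus penalty `gamma` are all assumptions for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # elementwise soft-thresholding: prox operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # plain coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1;
    # illustrative "local" solver, not the paper's
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
    return beta

def consensus_selection(X, y, lam, gamma, n_subsamples=20, frac=0.7, seed=0):
    # Fit local lasso models on random subsamples, then soft-threshold the
    # average of the local coefficients to get a sparse global consensus.
    # Hypothetical parameter names; a sketch of the consensus idea only.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(frac * n)
    local_betas = []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        local_betas.append(lasso_cd(X[idx], y[idx], lam))
    return soft_threshold(np.mean(local_betas, axis=0), gamma)
```

Averaging alone would destroy sparsity (a coefficient selected in any one subsample stays nonzero); the final soft-threshold is what restores a single sparse solution.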
Related works
Diversity Regularized Ensemble Pruning
Diversity among individual classifiers is recognized to play a key role in ensembles; however, few of its theoretical properties are known for classification. In this paper, by focusing on the popular ensemble pruning setting (i.e., combining classifiers by voting and measuring diversity in a pairwise manner), we present a theoretical study of the effect of diversity on the generalization performance of v...
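Measuring diversity "in a pairwise manner", as this abstract describes, typically means averaging a two-classifier statistic over all pairs. A minimal sketch using the pairwise disagreement rate (one common choice; the paper's exact measure may differ):

```python
import numpy as np

def pairwise_disagreement(preds):
    # preds: (n_classifiers, n_samples) array of predicted class labels.
    # For each pair of classifiers, compute the fraction of samples on
    # which they disagree, then average over all pairs.
    k = preds.shape[0]
    total = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            total += np.mean(preds[i] != preds[j])
    return total / (k * (k - 1) / 2)
```

An ensemble pruner can then search for a subset of classifiers that trades off this diversity term against individual accuracy.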
Diversity Regularized Machine
Ensemble methods, which train multiple learners for a task, are among the state-of-the-art learning approaches. The diversity of the component learners has been recognized as a key to a good ensemble, and existing ensemble methods try different ways to encourage diversity, mostly by heuristics. In this paper, we propose the diversity regularized machine (DRM) in a mathematical programming frame...
Regularized Weighted Ensemble of Deep Classifiers
An ensemble of classifiers improves classification performance, since the decisions of many experts are fused to produce the final prediction. Deep learning is a classification approach in which, along with the basic learning stage, fine-tuning is performed to improve learning precision. Deep classifier ensemble learning has good scope ...
Max-Margin Stacking and Sparse Regularization for Linear Classifier Combination and Selection
The main principle of stacked generalization (or Stacking) is to use a second-level generalizer to combine the outputs of the base classifiers in an ensemble. In this paper, we investigate different combination types under the stacking framework, namely weighted sum (WS), class-dependent weighted sum (CWS), and linear stacked generalization (LSG). For learning the weights, we propose using regularize...
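The weighted-sum (WS) combination this abstract mentions assigns one weight per base classifier, shared across classes. A minimal sketch assuming a ridge-regularized least-squares fit against one-hot labels (the paper itself uses max-margin losses and sparse regularizers; the array layout and `ridge` parameter here are assumptions):

```python
import numpy as np

def fit_ws_weights(scores, y_onehot, ridge=1e-3):
    # scores: (n_samples, n_classifiers, n_classes) base-classifier outputs.
    # WS stacking: one weight per classifier, shared across classes, fit by
    # ridge least squares: each (sample, class) pair becomes one regression row.
    n, k, c = scores.shape
    A = scores.transpose(0, 2, 1).reshape(n * c, k)
    b = y_onehot.reshape(n * c)
    return np.linalg.solve(A.T @ A + ridge * np.eye(k), A.T @ b)

def ws_predict(scores, w):
    # combine: sum_k w[k] * scores[:, k, :], then argmax over classes
    return np.argmax(np.einsum('nkc,k->nc', scores, w), axis=1)
```

CWS would instead learn a separate weight per (classifier, class) pair, and LSG a full linear map over all base outputs, at the cost of more parameters.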
On the Convergence of Leveraging
We give a unified convergence analysis of ensemble learning methods including, e.g., AdaBoost, Logistic Regression and the Least-SquareBoost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical o...
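The leveraging pattern described here (iteratively call a base learner on reweighted data, then linearly combine the returned hypotheses) can be sketched with AdaBoost on decision stumps. This illustrates the family of algorithms analyzed, not the paper's convergence framework; the stump learner and all function names are assumptions for the sketch.

```python
import numpy as np

def fit_stump(X, y, w):
    # weak learner: best single-feature threshold stump under weights w,
    # for labels y in {-1, +1}; returns (feature, threshold, sign)
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best[1:]

def stump_predict(X, stump):
    j, t, s = stump
    return s * np.where(X[:, j] <= t, 1, -1)

def adaboost(X, y, n_rounds=10):
    # leveraging loop: reweight examples, call base learner, add the new
    # hypothesis to a linear combination with standard AdaBoost step sizes
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y, w)
        pred = stump_predict(X, stump)
        err = max(w[pred != y].sum(), 1e-12)  # guard against log(0)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, stump))
        w *= np.exp(-alpha * y * pred)  # upweight misclassified points
        w /= w.sum()
    return ensemble

def ensemble_predict(X, ensemble):
    score = sum(a * stump_predict(X, s) for a, s in ensemble)
    return np.sign(score)
```

The Gauss-Southwell connection noted in the abstract is that each round greedily picks the coordinate (hypothesis) with the steepest loss decrease, just as Gauss-Southwell picks the coordinate with the largest gradient entry.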